Information Gravity: A Field-Theoretic Model for Token Selection in Large Language Models
Large language models (LLMs) have revolutionized the field of artificial intelligence, demonstrating text understanding and generation capabilities that approach human levels. Despite these impressive results, however, the internal mechanisms of these models remain largely a "black box." As Amodei [1] notes in his essay "The Urgency of Interpretability," researchers have limited understanding of why LLMs generate specific responses and how they arrive at their conclusions. This lack of transparency becomes increasingly problematic as LLMs take on central roles in economics, technology, and national security. Of particular concern are phenomena such as unpredictable hallucinations, extreme sensitivity to query formulation, and puzzling patterns in the probability distributions of generated tokens. These phenomena not only limit the reliability of LLMs in critical applications but also point to fundamental gaps in our understanding of how they operate.
Rethinking LLM-based Preference Evaluation
Zhengyu Hu, Linxin Song, Jieyu Zhang, Zheyuan Xiao, Jingang Wang, Zhenyu Chen, Jieyu Zhao, Hui Xiong
Recently, large language model (LLM)-based preference evaluation has been widely adopted to compare pairs of model responses. However, a severe bias towards lengthy responses has been observed, raising concerns about the reliability of this evaluation method. In this work, we design a series of controlled experiments to study the major factors affecting the metric used in LLM-based preference evaluation, i.e., the win rate, and conclude that the win rate is affected by two axes of the model response: desirability and information mass. The former is length-independent and related to trustworthiness; the latter is length-dependent and can be represented by conditional entropy. We find that length impacts existing evaluations by influencing information mass. A reliable evaluation metric, however, should not only assess content quality but also ensure that the assessment is not confounded by extraneous factors such as response length. We therefore propose a simple yet effective adjustment, AdapAlpaca, to the existing practice of win-rate measurement. Specifically, by adjusting the lengths of reference answers to match the test model's answers within the same interval, we debias information mass relative to length, ensuring a fair model evaluation.
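To make the length-matching idea concrete, below is a minimal sketch of how such an adjustment could be implemented. It is not the paper's released code: the interval boundaries, data layout, field names, and the judge callable are all illustrative assumptions; the only element taken from the abstract is the rule of comparing a test answer against a reference answer drawn from the same length interval.

```python
"""Sketch of length-interval-matched win-rate evaluation (AdapAlpaca-style).
All names and interval boundaries are illustrative assumptions."""
from statistics import mean

# Hypothetical length intervals (in whitespace tokens) used to bin answers.
INTERVALS = [(0, 50), (50, 150), (150, 400), (400, float("inf"))]


def length_bin(text: str) -> int:
    """Return the index of the length interval this answer falls into."""
    n = len(text.split())
    for i, (lo, hi) in enumerate(INTERVALS):
        if lo <= n < hi:
            return i
    return len(INTERVALS) - 1


def pick_length_matched_reference(test_answer: str, pool: list[str]) -> str:
    """Choose a reference answer from the same length bin as the test answer.
    Falls back to the closest-length reference if no bin matches."""
    target = length_bin(test_answer)
    same_bin = [r for r in pool if length_bin(r) == target]
    if same_bin:
        return same_bin[0]
    return min(pool, key=lambda r: abs(len(r.split()) - len(test_answer.split())))


def win_rate(examples: list[dict], judge) -> float:
    """Compute the win rate of the test model against length-matched references.
    `judge(prompt, a, b)` is an assumed LLM-judge callable returning True
    when answer `a` is preferred over answer `b`."""
    wins = []
    for ex in examples:
        ref = pick_length_matched_reference(ex["test_answer"], ex["reference_pool"])
        wins.append(judge(ex["prompt"], ex["test_answer"], ref))
    return mean(float(w) for w in wins)
```

In this sketch, length stops acting as a confounder because every comparison is made within a single length interval; any remaining difference in the judge's preference should then reflect content quality rather than information mass driven by response length.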